Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations.
Mathematically, this refers to solving:

$$\min_{x \in \Theta} f(x) = \operatorname{E}_{\xi}[F(x,\xi)]$$

where the objective is to find the parameter $x \in \Theta$ which minimizes $f(x)$ for some unknown random variable $\xi$. Denoting $d$ as the dimension of the parameter $x$, we can assume that while the domain $\Theta \subset \mathbb{R}^d$ is known, the objective function $f(x)$ cannot be computed exactly, but instead approximated via simulation. This can be intuitively explained as follows. $f(x)$ is the original function we want to minimize. However, due to noise, $f(x)$ cannot be evaluated exactly. This situation is modeled by the function $F(x,\xi)$, where $\xi$ represents the noise and is a random variable. Since $\xi$ is a random variable, so is the value of $F(x,\xi)$. The objective is then to minimize $f(x)$, but through evaluating $F(x,\xi)$. A reasonable way to do this is to minimize the expectation of $F(x,\xi)$, i.e., $f(x) = \operatorname{E}_{\xi}[F(x,\xi)]$.
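To make the setup concrete, here is a minimal sketch, assuming for illustration a quadratic objective corrupted by zero-mean Gaussian noise (both the objective and the noise model are hypothetical choices, not part of the original formulation), of how $f(x)$ is only accessible through noisy samples $F(x,\xi)$ and can be estimated by averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, xi):
    """Noisy observation F(x, xi) of the (hypothetical) objective f(x) = (x - 2)**2."""
    return (x - 2.0) ** 2 + xi  # xi plays the role of zero-mean noise

def estimate_f(x, n_samples=10_000):
    """Monte Carlo estimate of f(x) = E_xi[F(x, xi)] by averaging noisy evaluations."""
    xi = rng.normal(0.0, 1.0, size=n_samples)  # assumed noise distribution
    return F(x, xi).mean()

print(estimate_f(2.0))  # close to f(2) = 0, the minimum
print(estimate_f(3.0))  # close to f(3) = 1
```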
The first, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms.
The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro,[1] presented a methodology for solving a root-finding problem, where the function is represented as an expected value. Assume that we have a function $M(\theta)$, and a constant $\alpha$, such that the equation $M(\theta) = \alpha$ has a unique root at $\theta^*$. It is assumed that while we cannot directly observe the function $M(\theta)$, we can instead obtain measurements of the random variable $N(\theta)$ where $\operatorname{E}[N(\theta)] = M(\theta)$. The structure of the algorithm is to then generate iterates of the form:

$$\theta_{n+1} = \theta_n - a_n (N(\theta_n) - \alpha)$$
Here, $a_1, a_2, \dots$ is a sequence of positive step sizes. Robbins and Monro proved ([1], Theorem 2) that $\theta_n$ converges in $L^2$ (and hence also in probability) to $\theta^*$ provided that:

- $N(\theta)$ is uniformly bounded,
- $M(\theta)$ is nondecreasing,
- $M'(\theta^*)$ exists and is positive, and
- the sequence $a_n$ satisfies $\sum_{n=1}^{\infty} a_n = \infty$ and $\sum_{n=1}^{\infty} a_n^2 < \infty$.
A particular sequence of steps which satisfies these conditions, and which was suggested by Robbins and Monro, has the form $a_n = a/n$ for some $a > 0$. Other sequences are possible, but in order to average out the noise in $N(\theta)$, the above conditions must be met.
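As a concrete illustration, the following sketch implements the Robbins–Monro iteration with the step sizes $a_n = a/n$ suggested above; the target function $M$ and the noise model are hypothetical stand-ins chosen only so the example runs:

```python
import numpy as np

rng = np.random.default_rng(1)

def N(theta):
    """Noisy measurement of the unobservable M(theta); here M(theta) = tanh(theta)
    plus zero-mean Gaussian noise (both choices are illustrative assumptions)."""
    return np.tanh(theta) + rng.normal(0.0, 0.1)

def robbins_monro(alpha, theta0=0.0, a=1.0, n_iter=10_000):
    """Approximate the root of M(theta) = alpha from noisy measurements N(theta),
    using the step sizes a_n = a / n."""
    theta = theta0
    for n in range(1, n_iter + 1):
        a_n = a / n
        theta = theta - a_n * (N(theta) - alpha)
    return theta

# M(theta) = tanh(theta) = 0.5 has its root at arctanh(0.5) ~ 0.5493
print(robbins_monro(alpha=0.5))
```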
While the Robbins–Monro algorithm is theoretically able to achieve $O(1/n)$ under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily because the algorithm is very sensitive to the choice of the step size sequence, and the supposedly asymptotically optimal step size policy can be quite harmful in the beginning.[3][5]
To overcome this shortfall, Polyak and Juditsky[6] presented a method of accelerating Robbins–Monro through the use of longer steps and averaging of the iterates. The algorithm would have the following structure:

$$\theta_{n+1} = \theta_n - a_n (N(\theta_n) - \alpha), \qquad \bar{\theta}_n = \frac{1}{n} \sum_{i=0}^{n-1} \theta_i$$
The convergence of $\bar{\theta}_n$ to the unique root $\theta^*$ relies on the condition that the step sequence $\{a_n\}$ decreases sufficiently slowly. That is,

$$a_n \to 0, \qquad \frac{a_n - a_{n+1}}{a_n} = o(a_n).$$
Therefore, the sequence $a_n = n^{-\alpha}$ with $0 < \alpha < 1$ satisfies this restriction, but $\alpha = 1$ does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate $O(1/\sqrt{n})$ yet with a more robust step size policy.[6]
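A minimal sketch of the averaging scheme, using the longer steps $a_n = n^{-\alpha}$ with $0 < \alpha < 1$ discussed above and the same hypothetical measurement function as in the Robbins–Monro example:

```python
import numpy as np

rng = np.random.default_rng(2)

def N(theta):
    """Hypothetical noisy measurement of M(theta) = tanh(theta)."""
    return np.tanh(theta) + rng.normal(0.0, 0.1)

def polyak_juditsky(alpha_target, theta0=0.0, alpha=0.5, n_iter=10_000):
    """Robbins-Monro with longer steps a_n = n**(-alpha), 0 < alpha < 1,
    returning the running average of the iterates (Polyak-Juditsky averaging)."""
    theta = theta0
    theta_sum = 0.0
    for n in range(1, n_iter + 1):
        a_n = n ** (-alpha)
        theta = theta - a_n * (N(theta) - alpha_target)
        theta_sum += theta
    return theta_sum / n_iter  # the averaged iterate, not the last one

print(polyak_juditsky(alpha_target=0.5))  # ~ arctanh(0.5) ~ 0.5493
```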
Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin[7] for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate $O(1/\sqrt{n})$.
The Kiefer–Wolfowitz algorithm[8] was introduced in 1952, and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function. Let $M(x)$ be a function which has a maximum at the point $\theta$. It is assumed that $M(x)$ is unknown; however, certain observations $N(x)$, where $\operatorname{E}[N(x)] = M(x)$, can be made at any point $x$. The structure of the algorithm follows a gradient-like method, with the iterates being generated as follows:

$$x_{n+1} = x_n + a_n \cdot \frac{N(x_n + c_n) - N(x_n - c_n)}{2 c_n}$$
where the gradient of $M(x)$ is approximated using finite differences. The sequence $\{c_n\}$ specifies the sequence of finite difference widths used for the gradient approximation, while the sequence $\{a_n\}$ specifies a sequence of positive step sizes taken along that direction. Kiefer and Wolfowitz proved that, if $M(x)$ satisfies certain regularity conditions, then $x_n$ will converge to $\theta$ in probability provided that the positive sequences $\{a_n\}$ and $\{c_n\}$ satisfy:

$$c_n \to 0, \qquad \sum_{n} a_n = \infty, \qquad \sum_{n} a_n c_n < \infty, \qquad \sum_{n} \frac{a_n^2}{c_n^2} < \infty.$$
A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be $a_n = 1/n$ and $c_n = n^{-1/3}$.
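The following sketch implements the Kiefer–Wolfowitz iteration with the recommended sequences $a_n = 1/n$ and $c_n = n^{-1/3}$; the function being maximized and the noise model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def N(x):
    """Hypothetical noisy observation of M(x) = -(x - 1)**2, which has its
    maximum at theta = 1."""
    return -(x - 1.0) ** 2 + rng.normal(0.0, 0.1)

def kiefer_wolfowitz(x0=0.0, n_iter=10_000):
    """Stochastically estimate the maximizer of M using a central finite
    difference of noisy observations, with a_n = 1/n and c_n = n**(-1/3)."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n
        c_n = n ** (-1.0 / 3.0)
        grad_est = (N(x + c_n) - N(x - c_n)) / (2.0 * c_n)
        x = x + a_n * grad_est  # ascend along the estimated gradient
    return x

print(kiefer_wolfowitz())  # ~ 1.0
```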
An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on.[10][11] These methods are also applied in control theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size $a_n$ should not converge to zero but should be chosen so as to track the function ([10], 2nd ed., chapter 3).
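For instance, when the root drifts over time, a small constant step size lets the iterate track it; this sketch, with a hypothetical slowly moving target and noise model, illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(4)

def drifting_root(n):
    """Hypothetical slowly time-varying root the algorithm should track."""
    return np.sin(2 * np.pi * n / 5000)

theta = 0.0
a = 0.05  # constant step size: does not converge to zero, enabling tracking
for n in range(20_000):
    # noisy measurement of M_n(theta) = theta - drifting_root(n)
    noisy = (theta - drifting_root(n)) + rng.normal(0.0, 0.1)
    theta = theta - a * noisy
    if n % 5000 == 0:
        print(n, theta, drifting_root(n))  # theta stays near the moving root
```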
C. Johan Masreliez and R. Douglas Martin were the first to apply stochastic approximation to robust estimation.[12]